5 research outputs found

    ENCRYPTION THREE-DIMENSION IMAGE USING TINY ALGORITHM

    Advances in web systems have made it possible to use three-dimensional (3D) images over the internet, especially on social media. In earlier years, animated images and videos were rarely used online because of their large size: they require a huge amount of data to transmit, and supporting software is needed to present them to internet users on websites or social media. Most internet security work has applied ciphering to text or still images rather than to video or 3D images, because those two types of data have only recently come into widespread use. Given the heavy use of 3D images and videos in recent years, encrypting these sorts of data has become an urgent necessity. This research focuses on encrypting such data using the Tiny Encryption Algorithm (TEA). The algorithm is used to encrypt and decrypt 3D images and thereby protect the privacy of this sort of data. The research shows how to encode and decode a 3D image and how to handle this kind of data. The results show that TEA is a fast algorithm for encoding and decoding 3D images, needing only a small fraction of time to cipher and decipher them. The program used to test the ciphering and deciphering algorithm was based on MATLAB
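The TEA cipher itself is standard and easy to sketch. Below is a minimal pure-Python version of the 32-round encrypt/decrypt on one 64-bit block (two 32-bit words, with a 128-bit key as four 32-bit words). Treating the 3D image as a byte stream and packing it into such blocks, as `blocks_from_bytes` does here, is an assumption about how the pixel data is mapped to cipher blocks; the paper's own test program was written in MATLAB.

```python
# Minimal TEA sketch: 64-bit block as two 32-bit words, 128-bit key as
# four 32-bit words. All arithmetic is masked to 32 bits.
MASK = 0xFFFFFFFF
DELTA = 0x9E3779B9  # TEA's key-schedule constant

def tea_encrypt(block, key, rounds=32):
    v0, v1 = block
    s = 0
    for _ in range(rounds):
        s = (s + DELTA) & MASK
        v0 = (v0 + (((v1 << 4) + key[0]) ^ (v1 + s) ^ ((v1 >> 5) + key[1]))) & MASK
        v1 = (v1 + (((v0 << 4) + key[2]) ^ (v0 + s) ^ ((v0 >> 5) + key[3]))) & MASK
    return v0, v1

def tea_decrypt(block, key, rounds=32):
    # Exact mirror of encryption: undo the two half-block updates in
    # reverse order while stepping the round constant back down.
    v0, v1 = block
    s = (DELTA * rounds) & MASK
    for _ in range(rounds):
        v1 = (v1 - (((v0 << 4) + key[2]) ^ (v0 + s) ^ ((v0 >> 5) + key[3]))) & MASK
        v0 = (v0 - (((v1 << 4) + key[0]) ^ (v1 + s) ^ ((v1 >> 5) + key[1]))) & MASK
        s = (s - DELTA) & MASK
    return v0, v1

def blocks_from_bytes(data):
    # Pack an image byte stream into 8-byte blocks, zero-padding the tail
    # (how the paper maps 3D pixel data to blocks is an assumption here).
    data = data + b"\x00" * (-len(data) % 8)
    return [(int.from_bytes(data[i:i + 4], "big"),
             int.from_bytes(data[i + 4:i + 8], "big"))
            for i in range(0, len(data), 8)]
```

TEA's speed on image data comes from the round function using only shifts, additions, and XORs, which matches the abstract's observation that ciphering and deciphering take little time.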

    A new model for large dataset dimensionality reduction based on teaching learning-based optimization and logistic regression

    Breast cancer (BC) is one of the human diseases with a high mortality rate each year; among all forms of cancer, it is the commonest cause of death among women globally. Data mining and classification methods are effective ways of classifying data, and they are particularly useful in the medical field because medical datasets contain irrelevant and redundant attributes that are not needed to obtain an accurate disease diagnosis. Teaching-learning-based optimization (TLBO) is a new metaheuristic that has been successfully applied to several intractable optimization problems in recent years. This paper presents the use of a multi-objective TLBO algorithm for the selection of feature subsets in automatic BC diagnosis. For the classification task in this work, the logistic regression (LR) method was deployed. The results show that the proposed method produced better classification accuracy on the BC dataset (classified into malignant and benign), indicating that TLBO is an efficient feature-optimization technique for sustaining data-based decision-making systems
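The abstract does not give algorithmic details, but the basic single-objective TLBO loop has a standard two-phase shape: a teacher phase that pulls each learner toward the best solution and away from the population mean, and a learner phase of pairwise interactions. A minimal sketch on a toy objective follows; the sphere function stands in for the paper's feature-subset fitness, which is an assumption, as are all parameter values.

```python
import random

def tlbo(objective, dim, pop_size=20, iters=50, lo=-5.0, hi=5.0, seed=0):
    """Minimize objective over [lo, hi]^dim with basic (single-objective) TLBO."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [objective(x) for x in pop]
    clip = lambda x: [min(hi, max(lo, c)) for c in x]
    for _ in range(iters):
        # Teacher phase: move learners toward the best, away from the mean.
        best = pop[fit.index(min(fit))]
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i in range(pop_size):
            tf = rng.choice([1, 2])  # teaching factor
            cand = clip([pop[i][d] + rng.random() * (best[d] - tf * mean[d])
                         for d in range(dim)])
            f = objective(cand)
            if f < fit[i]:           # greedy acceptance
                pop[i], fit[i] = cand, f
        # Learner phase: each learner moves relative to a random peer.
        for i in range(pop_size):
            j = rng.randrange(pop_size)
            while j == i:
                j = rng.randrange(pop_size)
            sign = 1 if fit[i] < fit[j] else -1
            cand = clip([pop[i][d] + sign * rng.random() * (pop[i][d] - pop[j][d])
                         for d in range(dim)])
            f = objective(cand)
            if f < fit[i]:
                pop[i], fit[i] = cand, f
    k = fit.index(min(fit))
    return pop[k], fit[k]

sphere = lambda x: sum(v * v for v in x)  # toy stand-in for the BC fitness
best, best_f = tlbo(sphere, dim=5)
```

For feature selection, each position would be rounded or thresholded into a binary include/exclude mask and the fitness would combine LR accuracy with the subset size; the paper's multi-objective variant additionally keeps a Pareto set rather than a single best learner.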

    A new model for iris data set classification based on linear support vector machine parameter's optimization

    Data mining is known as the process of detecting patterns in essential amounts of data, as part of knowledge discovery. Classification is a form of data analysis that extracts a model describing important data classes. One of the outstanding classification methods in data mining is the support vector machine (SVM): it is capable of predicting outcomes and is mostly more effective than other classification methods. The SVM is a well-known supervised machine learning technique that has been applied successfully to a variety of regression, classification, and clustering problems in diverse domains such as gene expression and web text mining. In this study, we propose a new model for classifying the iris dataset using an SVM classifier, with a genetic algorithm to optimize the C and gamma parameters of the linear SVM; in addition, the principal component analysis (PCA) algorithm was used for feature reduction
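The tuning loop described, a genetic algorithm searching over (C, gamma), can be sketched independently of the SVM itself. In the sketch below, `toy_score` is a surrogate peaked at assumed values C = 1.0 and gamma = 0.5; in the actual study the score would be cross-validated SVM accuracy on the iris data after PCA, and all GA parameters here are illustrative assumptions.

```python
import random

def ga_tune(score, bounds, pop_size=16, gens=30, seed=1):
    """Maximize score(params) over box bounds with a simple real-coded GA."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=score, reverse=True)
        parents = pop[:pop_size // 2]                # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # arithmetic crossover
            k = rng.randrange(len(bounds))               # mutate one gene
            lo, hi = bounds[k]
            child[k] = min(hi, max(lo, child[k] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = parents + children
    return max(pop, key=score)

# Toy surrogate for cross-validated SVM accuracy, peaked at C = 1.0 and
# gamma = 0.5 (an assumption; in practice this would train and score an SVM).
toy_score = lambda p: -((p[0] - 1.0) ** 2 + (p[1] - 0.5) ** 2)
best = ga_tune(toy_score, bounds=[(0.01, 10.0), (0.001, 1.0)])
```

Keeping the parent half of the population across generations makes the search elitist, so the best (C, gamma) pair found never regresses as the GA runs.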

    Big Data Classification Efficiency Based on Linear Discriminant Analysis

    The recent proliferation of online platforms has led to an unprecedented increase in data generation; this has given rise to the concept of big data, which characterizes data in terms of volume, velocity, variety, and veracity. One of the common multivariate statistical data analysis tools is linear discriminant analysis (LDA), which relies on obtaining a separation among groups. The class of a given data point can be predicted through classification, a supervised learning technique, but prior to classification a model must first be built using a classification algorithm; several classification algorithms are available for prediction tasks. LDA is also commonly used to reduce the dimensionality of datasets. In this article, the use of LDA to improve the classification performance of different classification models is presented
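For two classes, the LDA separation the abstract mentions has a closed form: the discriminant direction is w = Sw^-1 (m1 - m0), where Sw is the within-class scatter matrix and m0, m1 are the class means. A minimal two-feature sketch (the toy data and two-dimensional restriction are assumptions; the article's datasets and models are not specified here):

```python
def lda_direction(X0, X1):
    """Fisher discriminant direction for two classes of 2-D points."""
    def mean(X):
        return [sum(x[d] for x in X) / len(X) for d in range(2)]
    m0, m1 = mean(X0), mean(X1)
    # Within-class scatter Sw = sum over both classes of (x - m)(x - m)^T.
    Sw = [[0.0, 0.0], [0.0, 0.0]]
    for X, m in ((X0, m0), (X1, m1)):
        for x in X:
            d = [x[0] - m[0], x[1] - m[1]]
            Sw[0][0] += d[0] * d[0]; Sw[0][1] += d[0] * d[1]
            Sw[1][0] += d[1] * d[0]; Sw[1][1] += d[1] * d[1]
    # w = Sw^{-1} (m1 - m0), via the 2x2 matrix inverse.
    det = Sw[0][0] * Sw[1][1] - Sw[0][1] * Sw[1][0]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [(Sw[1][1] * dm[0] - Sw[0][1] * dm[1]) / det,
         (-Sw[1][0] * dm[0] + Sw[0][0] * dm[1]) / det]
    return w, m0, m1

def lda_predict(x, w, m0, m1):
    # Project onto w and pick the class whose projected mean is nearer.
    p = x[0] * w[0] + x[1] * w[1]
    p0 = m0[0] * w[0] + m0[1] * w[1]
    p1 = m1[0] * w[0] + m1[1] * w[1]
    return 0 if abs(p - p0) < abs(p - p1) else 1

# Toy two-class data (assumption, for illustration only).
X0 = [[1.0, 1.0], [2.0, 1.0], [1.0, 2.0]]
X1 = [[6.0, 6.0], [7.0, 6.0], [6.0, 7.0]]
w, m0, m1 = lda_direction(X0, X1)
```

Projecting onto w is also the dimensionality reduction the abstract refers to: the 2-D points collapse to one discriminant score per point, which is then fed to a downstream classifier.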

    Large Dataset Classification Using Parallel Processing Concept

    Much attention has been paid to big data technologies in the past few years, mainly due to their capability to impact business analytics and data mining practices, as well as the possibility of enabling highly effective decision-making tools. With the current increase in the number of modern applications (including social media, other web-based applications, and healthcare applications) that generate data in different forms and volumes, the processing of such huge data volumes is becoming a challenge for conventional data processing tools. This has resulted in the emergence of big data analytics, which also comes with many challenges. This paper introduces the use of principal component analysis (PCA) for data size reduction, followed by SVM parallelization. The proposed scheme was executed on the Spark platform, and the experimental findings revealed its capability to reduce the classifier's classification time without much influence on classification accuracy
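The two stages named in the abstract, PCA for size reduction followed by data-parallel classification, can be illustrated without Spark. The sketch below uses power iteration to find the first principal component and a thread pool as a stand-in for Spark partitions; both choices, the toy data, and the fixed-threshold decision rule (in place of a trained SVM) are assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def pca_first_component(X, iters=100):
    """Mean and top principal direction of X via power iteration."""
    n, d = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(d)]
    # Covariance matrix of the centered data (d x d).
    C = [[sum((row[i] - mu[i]) * (row[j] - mu[j]) for row in X) / n
          for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):  # power iteration converges to the top eigenvector
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mu, v

def project(X, mu, v):
    # Reduce each row to its coordinate along the first principal component.
    return [sum((row[j] - mu[j]) * v[j] for j in range(len(v))) for row in X]

def parallel_classify(scores, threshold, workers=4):
    # Stride the reduced data into chunks and label each chunk in parallel,
    # mimicking per-partition classification on a Spark cluster (assumption).
    chunks = [scores[i::workers] for i in range(workers)]
    def classify(chunk):
        return [1 if s > threshold else 0 for s in chunk]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = list(ex.map(classify, chunks))
    out = [0] * len(scores)  # reassemble in the original order
    for i, part in enumerate(parts):
        for k, label in enumerate(part):
            out[i + k * workers] = label
    return out

# Toy 2-D data whose variance is dominated by the first axis (assumption).
X = [[-3.0, 0.1], [-2.0, -0.1], [-1.0, 0.0], [1.0, 0.0], [2.0, 0.1], [3.0, -0.1]]
mu, v = pca_first_component(X)
scores = project(X, mu, v)
labels = parallel_classify(scores, threshold=0.0)
```

The speedup the paper reports comes from exactly this structure: PCA shrinks each record before classification, and the per-partition work is embarrassingly parallel, so wall-clock classification time drops while the decision rule, and hence accuracy, is unchanged.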